1 Overview of the tutorial

Using the data from the Nanopore sequencing you conducted, in the following sessions we aim to:

  • Compare the read length and quality between the experimental conditions

  • Investigate how the read cleaning process affects the read yields

  • Identify the genomic variants from the sequence data

  • Interpret how the genomic variants affect the animal biologically

And we will learn:

  • How to use Orion to conduct genome analysis

  • Quality check, read filtering, mapping to the reference genome and variant calling

  • How to interpret summary statistics of Nanopore sequence data

  • How to interpret genetic variants

Using Orion

Prepare your computational tools with Conda

2 Reads quality check

Overview: (1) Quality check -> (2) Trimming of low-quality reads -> (3) Quality check

  1. Compare the overall read quality between the four conditions

2.1 Connect to Orion and prepare the tools

Go to https://orion.nmbu.no/ at NMBU or with VPN.

In the Terminal/Command Prompt, go to your directory. Review: the concept of the current directory

cd your_directory

Let’s make a directory for the analysis and enter it.

mkdir analysis # make a directory named "analysis"
cd analysis # change the current directory to "analysis"

Now, you will inspect the fastq file from your experiment, which contains Nanopore read information.

2.2 Check the read quality by Nanoplot

2.2.1 Browse the inside of the read (fastq) file

Review: looking into a file's content on the command line

Let’s learn how a fastq file (sequencing reads) looks using a sample file.

zcat /net/fs-2/scale/OrionStore/Courses/BIO326/EUK/pig_analysis/demo_data/pig_demodata_fastq.gz | more

Now you are seeing the content of a fastq file. (.gz = compressed)

Each entry in a FASTQ file consists of 4 lines:

  1. A sequence identifier with information about the sequencing run (run time, run ID, flow cell ID, …).

  2. The sequence (the base calls: A, C, T, G and N).

  3. A separator, which is simply a plus (+) sign.

  4. The base call quality scores. These are Phred+33 encoded, using ASCII characters to represent the numerical quality scores. quality score sheet
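To see what Phred+33 encoding means in practice, here is a minimal shell sketch (the quality character 'I' is just a made-up example): the numeric quality score is the character's ASCII code minus 33.

```shell
# Phred+33: the quality score is the ASCII code of the character minus 33.
# 'I' has ASCII code 73, so it encodes a quality score of 73 - 33 = 40.
Q=$(( $(printf '%d' "'I") - 33 ))
echo "$Q"   # prints 40
```

A score of 40 means an estimated base call error probability of 10^(-40/10) = 0.0001.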

2.2.2 Get basic stats of the fastq file

“zcat” -> decompress and print the file contents

“wc” -> word count

“-l” -> count lines instead of words

zcat /net/fs-2/scale/OrionStore/Courses/BIO326/EUK/pig_analysis/demo_data/pig_demodata_fastq.gz | wc -l

2.2.3 Discussion Point

Now you have the number of lines in the fastq file.

How many sequence reads are in the fastq file?

What is the quality of the first 5 bp? What is the quality of the bp between XX and YY? Why do you think they are different?

Need Help?

We see that there are 96000 lines in the fastq file.

As we learned, each entry in a FASTQ file consists of 4 lines, so one read corresponds to four lines. This file therefore contains 96000/4 = 24000 reads.
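The same arithmetic can be checked on a tiny made-up FASTQ file (the file name and reads here are hypothetical):

```shell
# Build a two-read demo FASTQ, compress it, and count reads as lines / 4
printf '@read1\nACGT\n+\nIIII\n@read2\nGGCC\n+\nJJJJ\n' | gzip > demo.fastq.gz
LINES=$(zcat demo.fastq.gz | wc -l)
echo $(( LINES / 4 ))   # prints 2
```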

2.3 Quality check by Nanoplot

The original fastq files may contain low-quality reads. In this step, we will use “NanoPlot” to see the quality and length of each read.

Make a slurm script like the one below to run the quality check on the sample file, and submit it.

Review: make a slurm script

Review: run a slurm script with sbatch


#!/bin/bash
#SBATCH --job-name=Nanoplot  # sensible name for the job
#SBATCH --mail-user=yourname@nmbu.no # Email me when job is done.
#SBATCH --mem=12G 
#SBATCH --ntasks=1   
#SBATCH --cpus-per-task=8
#SBATCH --mail-type=END

##Activate conda environment
module load Miniconda3 && eval "$(conda shell.bash hook)"

### NB! Remember to use your own conda environment:

conda activate $SCRATCH/ToolBox/EUKVariantDetection
echo "Working with this $CONDA_PREFIX environment ..."

NanoPlot -t 8  --fastq /net/fs-2/scale/OrionStore/Courses/BIO326/EUK/pig_analysis/demo_data/pig_demodata_fastq.gz --plots dot   --no_supplementary --no_static --N50 -p before

NanoPlot will generate result files whose names start with “before” (the -p prefix). Let’s look into them…

Review: File transfer between Orion and your computer


# job taking too long? Start an interactive session and copy the ready-made report:
qlogin

cp /net/fs-2/scale/OrionStore/Courses/BIO326/EUK/pig_analysis/demo_data/beforeNanoPlot-report.html beforeNanoPlot-report.html

Open “beforeNanoPlot-report.html” on your local computer

2.3.1 Filtering by NanoFilt


#!/bin/bash
#SBATCH --job-name=NanoFilt  # sensible name for the job
#SBATCH --mail-user=yourname@nmbu.no # Email me when job is done.
#SBATCH --mem=12G 
#SBATCH --ntasks=1   
#SBATCH --mail-type=END

##Activate conda environment
module load Miniconda3 && eval "$(conda shell.bash hook)"

### NB! Remember to use your own conda environment:

conda activate $SCRATCH/ToolBox/EUKVariantDetection
echo "Working with this $CONDA_PREFIX environment ..."


gunzip -c /net/fs-2/scale/OrionStore/Courses/BIO326/EUK/pig_analysis/demo_data/pig_demodata_fastq.gz | NanoFilt -q 10 -l 500 | gzip > cleaned.pig.fastq.gz

-l : filter on a minimum read length

-q : filter on a minimum average read quality score

In this case, we remove reads with an average quality score below 10 or a length shorter than 500 bases.

If you are ambitious, please adjust the filtering parameters and see how they change the result.

(In that case, do not forget to name the result files differently.)
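NanoFilt applies these criteria internally; purely as an illustration of the length criterion, here is a minimal awk sketch over 4-line FASTQ records (the 5 bp threshold and the two reads are made up):

```shell
# Keep only records whose sequence (line 2 of every 4) is at least 5 bp long
printf '@r1\nACG\n+\nIII\n@r2\nACGTTT\n+\nIIIIII\n' \
  | awk 'NR%4==1{h=$0} NR%4==2{s=$0} NR%4==3{sep=$0} NR%4==0{if (length(s) >= 5) print h"\n"s"\n"sep"\n"$0}'
# prints only the @r2 record (ACGTTT is 6 bp; ACG is filtered out)
```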

2.4 Compare the sequences before and after cleaning

Run NanoPlot again on the cleaned sequences.

Need help?

#!/bin/bash
#SBATCH --job-name=Nanoplot  # sensible name for the job
#SBATCH --mail-user=yourname@nmbu.no # Email me when job is done.
#SBATCH --mem=12G 
#SBATCH --ntasks=1   
#SBATCH --cpus-per-task=8
#SBATCH --mail-type=END

##Activate conda environment
module load Miniconda3 && eval "$(conda shell.bash hook)"

### NB! Remember to use your own conda environment:

conda activate $SCRATCH/ToolBox/EUKVariantDetection
echo "Working with this $CONDA_PREFIX environment ..."


NanoPlot -t 8  --fastq cleaned.pig.fastq.gz  --N50  --no_supplementary --no_static  --plots dot   -p after

Open “afterNanoPlot-report.html” on your local computer.


# job taking too long? Start an interactive session and copy the ready-made report:
qlogin

cp /net/fs-2/scale/OrionStore/Courses/BIO326/EUK/pig_analysis/demo_data/afterNanoPlot-report.html afterNanoPlot-report.html

2.4.1 Discussion Point

Did you see the difference in read length and quality distributions before and after the filtering?

2.5 Your mission

Do the quality check and filtering, and compare the read length and quality between the four experimental conditions.

For teachers: please make four input fastq files by merging multiple fastq files under the four conditions.

For teachers: please make ready-made result files and specify their location. Everything you need in case scripts do not work well… (script and result files)

3 Mapping to the reference genome

3.1 Run Minimap2 and map the reads to the reference genome

For teachers: please download the pig genome to the course directory and edit its path into the following script. pig genome

#!/bin/bash
#SBATCH --job-name=Minimap2  # sensible name for the job
#SBATCH --mail-user=yourname@nmbu.no # Email me when job is done.
#SBATCH --mem=12G 
#SBATCH --ntasks=1   
#SBATCH --cpus-per-task=8
#SBATCH --mail-type=END

##Activate conda environment
module load Miniconda3 && eval "$(conda shell.bash hook)"

### NB! Remember to use your own conda environment:

conda activate $SCRATCH/ToolBox/EUKVariantDetection
echo "Working with this $CONDA_PREFIX environment ..."


# map the Nanopore reads to the reference (-a: output in SAM format)
minimap2 -t 8 -a ref.fa.gz  cleaned.control.fastq.gz > pig.sam


# convert the sam file to bam format
singularity exec  /cvmfs/singularity.galaxyproject.org/all/samtools:1.16.1--h6899075_1 samtools view -S -b pig.sam > pig0.bam

## sort the bam file
singularity exec  /cvmfs/singularity.galaxyproject.org/all/samtools:1.16.1--h6899075_1 samtools sort pig0.bam -o pig.bam

# index the bam file
singularity exec  /cvmfs/singularity.galaxyproject.org/all/samtools:1.16.1--h6899075_1 samtools index -M  pig.bam


# Variant Calling using Sniffles
singularity exec /cvmfs/singularity.galaxyproject.org/all/sniffles:2.0.7--pyhdfd78af_0 sniffles --input  pig.bam --vcf pig.vcf

for teachers - please make a ready-made vcf file just in case

Now you have the variant file!

4 Investigate the variants

  • Celian will explain how to read a vcf file. Go to mentimeter (note to self: put link here)

# INFO field

grep '^##' pig.vcf | tail -n 20

# variants
grep -v '^##' pig.vcf | more

Important parameters

1 16849578 : location of the variant (chromosome 1, position 16849578)

SVTYPE=DEL;SVLEN=-60 : size and type of the variant

0/1 : genotype

(you can open a vcf file in notepad, excel etc.)
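To pull those fields out on the command line, here is a small sketch using a made-up INFO string in the style shown above (the END value is hypothetical):

```shell
# Split a VCF INFO string on ';' and pick out the structural-variant fields
INFO='SVTYPE=DEL;SVLEN=-60;END=16849638'
echo "$INFO" | tr ';' '\n' | grep '^SV'
# prints:
# SVTYPE=DEL
# SVLEN=-60
```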

Now you have variants! Let’s see what genes are affected by them.

First, we will select a random variant to investigate.

# Check the number of variants in the file
# (note: "bcftools index -n" requires a compressed, indexed VCF, so count the records directly)

NBVAR=$(bcftools view -H pig.vcf | wc -l)

## sample a random number

RANDOMVAR=$(( RANDOM % NBVAR + 1 ))

## let's check the variant sampled

bcftools view -H pig.vcf | sed -n ${RANDOMVAR}p

4.1 Estimate the effect of variants

Go to VEP (Variant Effect Predictor)

Variant Effect Predictor tells us where in the genome the discovered variants are located (genic, regulatory, etc.).

Select “pig” as the reference species.

Upload pig.vcf (downloaded from Orion in the section above) as the file to investigate.

There are 428 variants; 88 genes are affected by these variants.

What are the most affected genes?

Click “Filters” and set “Impact is HIGH” to select high-impact variants.

There are some frameshift/transcript ablation variants.

Let’s closely investigate your variant!

Find your variant by downloading the .txt file